Designing Parallel Computers for Self Organizing Maps

Author

  • Tomas Nordström
Abstract

Self organizing maps (SOM) are a class of artificial neural network (ANN) models developed by Kohonen. There are a number of variants, of which the self organizing feature map (SOFM) is one of the most widely used ANN models with unsupervised learning. Learning vector quantization (LVQ) is another group of SOM which can be used as very efficient classifiers. SOM have been used in a variety of fields, e.g. robotics, telecommunication, and speech recognition. Currently there is great interest in using parallel computers for ANN models. In this report we describe different ways to implement SOM on parallel computers. We study the design of massively parallel computers, especially computers with simple processing elements, used for SOM calculations. It is found that SOM (like many other ANN models) demands very little of a parallel computer. If support for broadcast and multiplication is included, very good performance can be achieved on otherwise modest hardware.

1.0 INTRODUCTION

The algorithms we study in this report are Kohonen's self organizing maps (SOM) and variants of them. These maps have been used in pattern recognition, especially in speech recognition [27], but also in robotics and automatic control [40, 46] and telecommunication tasks [3, 32]. This study is part of a series of reports [43, 44, 49] that shows how well suited bit-serial SIMD computers are for simulating artificial neural networks. As an example of bit-serial SIMD computers, REMAP 3 (reconfigurable, embedded, massively parallel processor project) will be used. As the processing elements are reconfigurable, it is possible to include different types of support for different kinds of algorithms. For back-propagation [47] and Hopfield networks [18, 19, 20] a bit-serial multiplier has been found to be essential for performance [44, 49]. For the implementation of Kanerva's SDM model [25] the multiplier was not needed; instead a counter was suggested [43]. In this report we try to identify the architectural principles and components that are essential for efficient calculation of Kohonen's models.

In the next section we describe the background of SOM. After that, two sections discuss implementation considerations and ways to map SOM onto a computer architecture. Then follows a section where some of the existing parallel implementations are discussed. Finally, we draw some conclusions concerning the task of designing parallel computers for SOM.

2.0 BACKGROUND

An overview of the different models of self organizing maps, and the application areas where they have been used, can be found in [26, 28, 29, 30, 31]. Below we only restate the basic models and refer to the references above for further details.

2.1 Competitive Learning

In competitive learning [30, 47] the responses from the adaptive nodes (weight vectors) tend to become localized. After appropriate training, the nodes specify clusters or codebook vectors that approximate the probability density function of the input vectors. Algorithm 1 is an example of a competitive learning algorithm. If the spatial relationships of the resulting feature-sensitive nodes are not considered, we get a zero-order topology map.

Algorithm 1: Competitive learning (zero-order topology).
1. Find the node (or weight vector) closest to the input x.
2. Move the winning node closer to the input.
3. Repeat from step 1 while reducing the learning rate.

2.1.1 Adding Conscience

A problem with the algorithm above is that, instead of placing the nodes according to the input point density function p(x), the nodes tend to be placed as p(x)^(M/(M+2)), where M is the dimension of the input vectors.
With low-dimensional input vectors (i.e. small M) there will therefore be a bias towards the low-probability regions. DeSieno [6] has found that adding conscience to the competitive learning algorithm greatly improves the encoding produced by the map. The idea is that the nodes should be conscientious about how many times they have won compared to the other nodes (cf. Algorithm 1); that is, every node should win the competition approximately the same fraction of the time. The winner c at time t_k is, as in step 1 of Algorithm 1, the node whose weight vector is closest to the input:

||x(t_k) - w_c(t_k)|| = min_i ||x(t_k) - w_i(t_k)||
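
To make the two steps of Algorithm 1 and the conscience idea concrete, the following is a minimal NumPy sketch of zero-order-topology competitive learning with a DeSieno-style conscience bias. The constants (learning-rate schedule, bias gain, frequency update rate) are illustrative choices, not values taken from the report.

```python
import numpy as np

def competitive_learning(data, n_nodes, epochs=20, lr=0.5,
                         bias_gain=10.0, beta=1e-4, seed=0):
    """Zero-order-topology competitive learning (Algorithm 1) with a
    DeSieno-style conscience term.  All constants are illustrative."""
    rng = np.random.default_rng(seed)
    # Initialise the weight vectors from randomly chosen input vectors.
    w = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
    p = np.full(n_nodes, 1.0 / n_nodes)            # running win frequencies

    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)        # step 3: shrinking learning rate
        for x in data[rng.permutation(len(data))]:
            # Step 1: find the closest node, handicapping nodes that win
            # too often (the "conscience" bias).
            dist = np.linalg.norm(x - w, axis=1)
            bias = bias_gain * (1.0 / n_nodes - p)
            c = int(np.argmin(dist - bias))
            # Step 2: move the winning node closer to the input.
            w[c] += alpha * (x - w[c])
            # Update the win-frequency estimates.
            p += beta * ((np.arange(n_nodes) == c).astype(float) - p)
    return w

# Example: quantise 1000 two-dimensional points into 16 codebook vectors.
# codebook = competitive_learning(np.random.default_rng(1).normal(size=(1000, 2)), 16)
```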

Related articles

Implementations of asynchronous self-organizing maps on OpenMP and MPI parallel computers

In [1], we presented an asynchronous parallel algorithm for self-organizing maps based on a recently defined energy function which leads to a self-organizing map. We generalized the existing stochastic gradient approach to an asynchronous parallel stochastic gradient method for generating a topological map on a distributed computer system (MIMD). We theoretically proved that our algorithm was c...
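
For readers who want a feel for how a data-partitioned parallel SOM update can look, here is a hedged sketch of a synchronous batch step using mpi4py: each rank holds a slice of the training data and the partial sums are combined with Allreduce. It is only a generic illustration, not the asynchronous stochastic gradient algorithm of the cited paper.

```python
# Generic data-parallel batch-SOM step (mpi4py assumed available).  Each MPI
# rank holds its own slice of the training data; partial sums are combined
# with Allreduce.  This synchronous sketch is not the asynchronous algorithm
# described in the paper above.
import numpy as np
from mpi4py import MPI

def batch_som_step(local_data, w, sigma):
    """One batch update of a 1-D map; w has shape (n_nodes, dim)."""
    comm = MPI.COMM_WORLD
    n_nodes = w.shape[0]
    grid = np.arange(n_nodes, dtype=float)                 # 1-D map positions
    num = np.zeros_like(w)
    den = np.zeros(n_nodes)
    for x in local_data:
        c = np.argmin(np.linalg.norm(x - w, axis=1))       # local winner search
        h = np.exp(-(grid - c) ** 2 / (2.0 * sigma ** 2))  # neighbourhood weights
        num += h[:, None] * x
        den += h
    # Combine the partial sums from all ranks and form the new codebook.
    g_num = np.empty_like(num)
    g_den = np.empty_like(den)
    comm.Allreduce(num, g_num, op=MPI.SUM)
    comm.Allreduce(den, g_den, op=MPI.SUM)
    return g_num / np.maximum(g_den, 1e-12)[:, None]
```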

Implementation of Data Mining Techniques for Meteorological Applications

The CrossGrid project is one of the ongoing research projects involving GRID technology. One of the main tasks in the Meteorological applications package is the implementation of data mining systems for the analysis of operational and reanalysis databases of atmospheric circulation patterns. Previous parallel data mining algorithms reported in the literature focus on parallel computers with pre...

Creating an Order in Distributed Digital Libraries by Integrating Independent Self-Organizing Maps

Digital document libraries are an almost perfect application arena for unsupervised neural networks. This is because many of the operations computers have to perform on text documents are classification tasks based on "noisy" input patterns. The "noise" arises because of the known inaccuracy of mapping natural language to an indexing vocabulary representing the contents of the documents. A growing...

Analog implementation of a Kohonen map with on-chip learning

Kohonen maps are self-organizing neural networks that classify and quantify n-dimensional data into a one- or two-dimensional array of neurons. Most applications of Kohonen maps use simulations on conventional computers, eventually coupled to hardware accelerators or dedicated neural computers. The small number of different operations involved in the combined learning and classification process...
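
As background for what such a chip has to compute, the sketch below shows the two combined operations in plain NumPy: classification (winner search) and the neighbourhood-weighted learning update on a two-dimensional grid. It is a straightforward digital reference only and says nothing about the analog circuitry of the paper.

```python
import numpy as np

def sofm_step(x, w, grid_pos, alpha, sigma):
    """One on-line Kohonen/SOFM step.

    x        -- input vector, shape (dim,)
    w        -- float weight vectors, shape (n_nodes, dim), updated in place
    grid_pos -- each node's (row, col) position on the map, shape (n_nodes, 2)
    """
    c = np.argmin(np.linalg.norm(x - w, axis=1))         # classification: winner c
    d2 = np.sum((grid_pos - grid_pos[c]) ** 2, axis=1)   # squared grid distances to c
    h = np.exp(-d2 / (2.0 * sigma ** 2))                 # Gaussian neighbourhood kernel
    w += alpha * h[:, None] * (x - w)                    # learning: pull neighbours in
    return c
```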

Data Parallel Simulation of Self-organizing Maps on Hypercube Architectures

In this paper a data parallel model for the parallel simulation of self-organizing maps is proposed. This approach is based on the SPMD (single program multiple data) model and utilizes the highly specialized programming environment of the Vienna FORTRAN Compilation System (VFCS) for the parallelization process. It allows an easy development of the simulation system as a sequential program attr...
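
To illustrate one common SPMD decomposition (here partitioning the map's nodes across processes rather than the training data), the following hypothetical mpi4py sketch finds the global winner for one input vector; it is not the Vienna FORTRAN (VFCS) implementation described in the paper.

```python
# Node-parallel winner search: each rank simulates a block of the map's nodes,
# mirroring an SPMD decomposition of the weight matrix.  Hypothetical sketch.
import numpy as np
from mpi4py import MPI

def distributed_winner(x, local_w, node_offset):
    """local_w holds this rank's block of weight vectors; node_offset is the
    global index of the first node in that block."""
    comm = MPI.COMM_WORLD
    dist = np.linalg.norm(x - local_w, axis=1)    # each rank scores its own nodes
    i = int(np.argmin(dist))
    # Every rank contributes its best candidate; the minimum over the gathered
    # (distance, global index) pairs identifies the global winner.
    candidates = comm.allgather((float(dist[i]), node_offset + i))
    return min(candidates)[1]
```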

Publication year: 1992